Recent work attributes progress in NLP to large language models (LMs) with increased model size and large quantities of pretraining data. Despite this, current state-of-the-art LMs for Hebrew are both under-parameterized and under-trained compared to LMs in other languages. Additionally, previous work on pretrained Hebrew LMs focused on encoder-only models. While the encoder-only architecture is beneficial for classification tasks, it does not cater well for sub-word prediction tasks, such as Named Entity Recognition, when considering the morphologically rich nature of Hebrew. In this paper we argue that sequence-to-sequence generative architectures are more suitable for LLMs in the case of morphologically rich languages (MRLs) such as Hebrew. We demonstrate that by casting tasks in the Hebrew NLP pipeline as text-to-text tasks, we can leverage powerful multilingual, pretrained sequence-to-sequence models such as mT5, eliminating the need for a specialized, morpheme-based, separately fine-tuned decoder. Using this approach, our experiments show substantial improvements over previously published results on existing Hebrew NLP benchmarks. These results suggest that multilingual sequence-to-sequence models present a promising building block for NLP for MRLs.
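As a rough illustration of this text-to-text casting, the sketch below fine-tunes an off-the-shelf mT5 checkpoint on a Hebrew example serialized as plain text; the "parse:" prompt, the placeholders, and the checkpoint name are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch: casting a Hebrew NLP task as text-to-text with mT5.
# The "parse:" prompt, the placeholders, and the checkpoint are
# illustrative assumptions, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_name = "google/mt5-small"  # small public checkpoint, for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

# Hypothetical input/target pair: the raw sentence goes in as text and the
# linearized analysis (e.g., segmented morphemes with labels) comes out as text.
source = "parse: <Hebrew sentence here>"
target = "<linearized morphological analysis here>"

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss   # standard seq2seq fine-tuning loss
loss.backward()

# At inference time, ordinary generation replaces a specialized decoder.
with torch.no_grad():
    pred = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(pred[0], skip_special_tokens=True))
```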
Inference from large autoregressive models like Transformers is slow - decoding K tokens takes K serial runs of the model. In this work we introduce speculative decoding - an algorithm to sample from autoregressive models faster without any changes to the outputs, by computing several tokens in parallel. At the heart of our approach lie the observations that (1) hard language-modeling tasks often include easier subtasks that can be approximated well by more efficient models, and (2) using speculative execution and a novel sampling method, we can make exact decoding from the large models faster, by running them in parallel on the outputs of the approximation models, potentially generating several tokens concurrently, and without changing the distribution. Our method supports existing off-the-shelf models without retraining or architecture changes. We demonstrate it on T5-XXL and show a 2X-3X acceleration compared to the standard T5X implementation, with identical outputs.
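The acceptance rule at the core of the method can be sketched as follows, assuming placeholder interfaces that return next-token distributions for a small draft model and the large target model; this is a minimal illustration of speculative sampling, not the T5X implementation.

```python
# Minimal sketch of speculative decoding: a cheap draft model proposes
# gamma tokens, the large model scores them in one parallel pass, and a
# modified rejection step keeps the target distribution exact.
# `draft_probs` / `target_probs` are placeholder interfaces returning
# per-position next-token distributions (numpy arrays over the vocabulary).
import numpy as np

def speculative_step(prefix, draft_probs, target_probs, gamma, rng):
    # 1) Draft gamma tokens autoregressively with the small model.
    drafted, q_dists = [], []
    ctx = list(prefix)
    for _ in range(gamma):
        q = draft_probs(ctx)
        tok = int(rng.choice(len(q), p=q))
        q_dists.append(q)
        drafted.append(tok)
        ctx.append(tok)

    # 2) One parallel pass of the large model over prefix + drafted tokens
    #    yields target distributions at every drafted position plus one extra.
    p_dists = target_probs(list(prefix), drafted)  # length == gamma + 1

    # 3) Accept each drafted token with probability min(1, p(x)/q(x));
    #    on the first rejection, resample from the residual (p - q)^+.
    accepted = []
    for i, tok in enumerate(drafted):
        p_i, q_i = p_dists[i], q_dists[i]
        if rng.random() < min(1.0, p_i[tok] / max(q_i[tok], 1e-20)):
            accepted.append(tok)
        else:
            residual = np.clip(p_i - q_i, 0.0, None)
            residual /= residual.sum()
            accepted.append(int(rng.choice(len(residual), p=residual)))
            return accepted  # stop at the first rejection
    # 4) All drafts accepted: sample one bonus token from the extra distribution.
    accepted.append(int(rng.choice(len(p_dists[gamma]), p=p_dists[gamma])))
    return accepted
```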
Denoising diffusion models (DDMs) have led to staggering performance leaps in image generation, editing and restoration. However, existing DDMs use very large datasets for training. Here, we introduce a framework for training a DDM on a single image. Our method, which we coin SinDDM, learns the internal statistics of the training image by using a multi-scale diffusion process. To drive the reverse diffusion process, we use a fully-convolutional light-weight denoiser, which is conditioned on both the noise level and the scale. This architecture allows generating samples of arbitrary dimensions, in a coarse-to-fine manner. As we illustrate, SinDDM generates diverse high-quality samples, and is applicable in a wide array of tasks, including style transfer and harmonization. Furthermore, it can be easily guided by external supervision. Particularly, we demonstrate text-guided generation from a single image using a pre-trained CLIP model.
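A minimal PyTorch sketch of a fully-convolutional denoiser conditioned on both noise level and scale, in the spirit of the architecture described above, is given below; the channel sizes and conditioning scheme are assumptions, not the released SinDDM model.

```python
# Rough sketch of a fully-convolutional denoiser conditioned on noise level
# and scale (channel sizes and the conditioning scheme are assumptions).
import torch
import torch.nn as nn

class TinyScaleAwareDenoiser(nn.Module):
    def __init__(self, channels=64, num_scales=5):
        super().__init__()
        # Embed the scale index and broadcast it (with the noise level) as
        # extra conditioning channels; no spatial downsampling, so the
        # network accepts images of arbitrary size.
        self.scale_emb = nn.Embedding(num_scales, 1)
        self.net = nn.Sequential(
            nn.Conv2d(3 + 2, channels, 3, padding=1), nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.GELU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, noisy, noise_level, scale_idx):
        b, _, h, w = noisy.shape
        lvl = noise_level.view(b, 1, 1, 1).expand(b, 1, h, w)
        scl = self.scale_emb(scale_idx).view(b, 1, 1, 1).expand(b, 1, h, w)
        return self.net(torch.cat([noisy, lvl, scl], dim=1))  # predicted noise

# Works on arbitrary spatial sizes, e.g. a 1x3x97x123 input.
x = torch.randn(1, 3, 97, 123)
eps_hat = TinyScaleAwareDenoiser()(x, torch.tensor([0.3]), torch.tensor([2]))
```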
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
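For readers who want to experiment with the released checkpoints, a minimal generation example using the Hugging Face transformers library is sketched below with the small bigscience/bloom-560m variant; loading the full 176B model requires a multi-GPU setup and is not shown.

```python
# Minimal usage sketch: generate text with a small BLOOM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"  # small released variant, for illustration
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "BLOOM is a 176B-parameter open-access language model that"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```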
We propose an algorithm for learning a conditional generative model of a molecule given a target. Specifically, given a receptor molecule that one wishes to bind to, the conditional model generates candidate ligand molecules that may bind to it. The distribution should be invariant to rigid body transformations that act $\textit{jointly}$ on the ligand and the receptor; it should also be invariant to permutations of either the ligand or receptor atoms. Our learning algorithm is based on a continuous normalizing flow. We establish semi-equivariance conditions on the flow which guarantee the aforementioned invariance conditions on the conditional distribution. We propose a graph neural network architecture which implements this flow, and which is designed to learn effectively despite the vast differences in size between the ligand and receptor. We evaluate our method on the CrossDocked2020 dataset, attaining a significant improvement in binding affinity over competing methods.
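In symbols (notation chosen here for illustration, writing $x_R$ for the receptor atom coordinates and $x_L$ for the ligand atom coordinates): the learned conditional density should satisfy $p(Rx_L + t \mid Rx_R + t) = p(x_L \mid x_R)$ for every rotation $R$ and translation $t$ applied jointly to both molecules, and $p(\sigma_L x_L \mid \sigma_R x_R) = p(x_L \mid x_R)$ for arbitrary permutations $\sigma_L$ and $\sigma_R$ of the ligand and receptor atoms.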
Evolutionary computation (EC) has been shown to be able to quickly train deep artificial neural networks (DNNs) to solve reinforcement learning (RL) problems. While a genetic algorithm (GA) is well suited for exploiting reward functions that are neither deceptive nor sparse, it struggles when the reward function is either of those. To this end, novelty search (NS) has been shown to be able to outperform gradient-following optimizers in some cases, while underperforming in others. We propose a new algorithm: Explore-Exploit $\gamma$-Adaptive Learner ($E^2\gamma AL$, or EyAL). By preserving a dynamically-sized niche of novelty-seeking agents, the algorithm maintains population diversity, exploiting the reward signal when possible and exploring otherwise. The algorithm combines the exploitation power of a GA with the exploration power of NS, while maintaining their simplicity and elegance. Our experiments show that EyAL outperforms NS in most scenarios while being on par with a GA, and in some scenarios it outperforms both. EyAL also allows the substitution of the exploiting component (GA) and the exploring component (NS) with other algorithms, such as Evolution Strategy and Surprise Search, thus opening the door for future research.
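A highly simplified sketch of such an explore-exploit loop is given below; the niche-resizing rule, the fitness and novelty functions, and the variation operator are placeholders rather than the paper's exact algorithm.

```python
# Highly simplified sketch of an explore-exploit evolutionary loop that keeps
# a dynamically-sized niche of novelty-seeking agents alongside reward-seeking
# ones. Fitness/novelty functions, operators, and the resizing rule are
# placeholders, not the paper's exact algorithm.
import random

def evolve(init_pop, fitness, novelty, mutate, generations=100):
    pop = list(init_pop)
    explore_frac = 0.5  # fraction of the population devoted to novelty search
    best_so_far = float("-inf")
    for _ in range(generations):
        scored = [(ind, fitness(ind), novelty(ind, pop)) for ind in pop]
        n_explore = int(explore_frac * len(pop))

        # Exploit niche: select parents by reward; explore niche: by novelty.
        by_fit = sorted(scored, key=lambda s: s[1], reverse=True)
        by_nov = sorted(scored, key=lambda s: s[2], reverse=True)
        parents = [s[0] for s in by_fit[: len(pop) - n_explore]] + \
                  [s[0] for s in by_nov[:n_explore]]

        # Shrink the exploration niche while the reward signal keeps improving,
        # grow it again when progress stalls (one possible adaptation rule).
        gen_best = by_fit[0][1]
        explore_frac = max(0.1, explore_frac - 0.05) if gen_best > best_so_far \
            else min(0.9, explore_frac + 0.05)
        best_so_far = max(best_so_far, gen_best)

        pop = [mutate(random.choice(parents)) for _ in range(len(pop))]
    return max(pop, key=fitness)
```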
Natural language processing (NLP) algorithms are rapidly improving but often struggle when applied to out-of-distribution examples. A prominent approach to mitigate the domain gap is domain adaptation, where a model trained on a source domain is adapted to a new target domain. We present a new learning setup, ``domain adaptation from scratch'', which we believe is crucial for extending the reach of NLP to sensitive domains in a privacy-preserving manner. In this setup, we aim to efficiently annotate data from a set of source domains such that the trained model performs well on a sensitive target domain, from which annotations cannot be obtained. Our study compares several approaches for this challenging setup, from data selection and domain adaptation algorithms to active learning paradigms, on two NLP tasks: sentiment analysis and Named Entity Recognition. Our results suggest that the above methods ease the domain gap, and that combining them further improves the results.
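As one concrete (and simplified) instance of the active learning paradigms mentioned above, the sketch below selects which source-domain examples to annotate via margin-based uncertainty sampling; the model interface and the annotation callback are placeholders.

```python
# Simplified pool-based active learning sketch for choosing which source-domain
# examples to annotate. `model` (with fit/predict_proba) and `annotate`
# (the human labeling step) are placeholder interfaces.
import numpy as np

def uncertainty_sampling(model, pool, annotate, budget, batch_size=50):
    labeled = []
    pool = list(pool)
    while len(labeled) < budget and pool:
        probs = np.asarray(model.predict_proba(pool))   # (n_examples, n_classes)
        top2 = np.sort(probs, axis=1)[:, -2:]
        margin = top2[:, 1] - top2[:, 0]                # small margin = uncertain
        picked_idx = set(np.argsort(margin)[:batch_size].tolist())
        picked = [x for i, x in enumerate(pool) if i in picked_idx]
        labeled.extend(annotate(picked))                # returns (x, y) pairs
        pool = [x for i, x in enumerate(pool) if i not in picked_idx]
        xs, ys = zip(*labeled)
        model.fit(list(xs), list(ys))                   # retrain on all labels so far
    return model, labeled
```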
The shortest path problem in graphs is a cornerstone of both theory and applications. Existing work considers edge-weight access time but typically ignores edge-weight computation time. In this paper, we present a generalized framework for weighted directed graphs in which the cost of each edge can be dynamically estimated by multiple estimators that offer different cost bounds and run-times. This gives rise to several generalized shortest path problems that optimize different aspects of path cost while requiring guarantees on cost uncertainty, providing a better foundation for modeling realistic problems. We present complete anytime algorithms for solving these problems and provide guarantees on the solution quality.
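A rough sketch of the kind of graph representation this framework implies is given below: each edge carries a sequence of estimators returning progressively tighter cost intervals at a higher computational price, and a planner can search on lower bounds while refining only the edges that matter. The names and the refinement policy are illustrative, not the paper's algorithms.

```python
# Rough sketch of edges whose costs are known only through a sequence of
# estimators, each returning a tighter [lower, upper] interval at a higher
# computational price. An anytime planner can refine bounds lazily, only on
# edges that matter for the current best path.
import heapq

class Edge:
    def __init__(self, u, v, estimators):
        # estimators: list of callables, each returning (lower, upper),
        # ordered from cheapest/coarsest to most expensive/tightest.
        self.u, self.v = u, v
        self.estimators = estimators
        self.level = 0
        self.lower, self.upper = estimators[0]()

    def refine(self):
        # Run the next, more expensive estimator to tighten the interval.
        if self.level + 1 < len(self.estimators):
            self.level += 1
            self.lower, self.upper = self.estimators[self.level]()

def optimistic_shortest_path(edges, source, target):
    """Dijkstra on lower bounds: returns an optimistic path cost. An anytime
    outer loop could call refine() on edges of the returned path until the
    uncertainty (upper - lower) along it meets a required guarantee."""
    adj = {}
    for e in edges:
        adj.setdefault(e.u, []).append(e)
    dist, frontier = {source: 0.0}, [(0.0, source)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for e in adj.get(u, []):
            nd = d + e.lower
            if nd < dist.get(e.v, float("inf")):
                dist[e.v] = nd
                heapq.heappush(frontier, (nd, e.v))
    return float("inf")
```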
High energy density physics (HEDP) experiments commonly involve a dynamic wave-front propagating inside a low-density foam. This effect alters the foam's density and, consequently, its transparency. A common problem in foam production is the creation of defective foams. Accurate information about their dimensions and homogeneity is required to classify foam quality, so these parameters are characterized using a 3D-measuring laser confocal microscope. For each foam, five images are taken: two 2D images representing the top and bottom foam planes, and three images of side cross-sections from 3D scans. An expert must carry out the complex, demanding, and exhausting work of manually classifying the foam's quality from the image set before determining whether the foam can be used in an experiment. Currently, quality has two binary levels: normal vs. defective. At the same time, experts are often required to classify a sub-category of normal-defective, i.e., foams that are defective but may still be usable in experiments. This sub-class is problematic due to inconclusive, largely intuitive judgment. In this work, we present a novel state-of-the-art multi-view deep learning classification model that mimics the physicist's perspective by automatically determining the foams' quality classification, thereby aiding the experts. Our model achieves 86% accuracy on the upper and lower surface foam planes and 82% on the entire set, suggesting interesting heuristics for the problem. A significant value of this work is the ability to regress the foam quality instead of making a binary decision, and even to explain the decision visually. The source code used in this work, as well as other related resources, is available at: https://github.com/scientific-computing-lab-nrcn/multi-view-foams.git
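A minimal PyTorch sketch of a multi-view classifier in this spirit is shown below: a shared convolutional encoder processes each of the five views and the pooled features are fused before the quality head. The encoder size and fusion scheme are assumptions; the authors' actual code is at the repository linked above.

```python
# Minimal multi-view classification sketch: a shared CNN encodes each of the
# five foam views (top/bottom planes + three side cross-sections) and the
# pooled features are fused before the quality head. Encoder size and fusion
# are assumptions, not the released model.
import torch
import torch.nn as nn

class MultiViewFoamClassifier(nn.Module):
    def __init__(self, num_views=5, feat_dim=64, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(              # shared across views
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(num_views * feat_dim, num_classes)

    def forward(self, views):                      # views: (batch, 5, 1, H, W)
        feats = [self.encoder(views[:, i]) for i in range(views.shape[1])]
        return self.head(torch.cat(feats, dim=1))  # normal-vs-defective logits

logits = MultiViewFoamClassifier()(torch.randn(2, 5, 1, 128, 128))
```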
An accurate prognosis for traumatic brain injury (TBI) patients is difficult yet essential to inform treatment, patient management, and long-term aftercare. Patient characteristics such as age, motor and pupillary responsiveness, hypoxia and hypotension, and radiological findings on computed tomography (CT) have been identified as important variables for TBI outcome prediction. CT is the acute imaging modality of choice in clinical practice because of its acquisition speed and widespread availability. However, this modality is mainly used for qualitative and semi-quantitative assessment, such as the Marshall scoring system, which is prone to subjectivity and human error. This work explores the predictive power of imaging biomarkers extracted from routinely acquired hospital admission CT scans using a state-of-the-art, deep-learning TBI lesion segmentation method. We use lesion volumes and corresponding lesion statistics as inputs to an extended TBI outcome prediction model. We compare the predictive power of our proposed features to the Marshall score, pairing both with classical TBI biomarkers. We find that automatically extracted quantitative CT features perform similarly to or better than the Marshall score in predicting unfavorable TBI outcomes. Leveraging automatic atlas alignment, we also identify frontal extra-axial lesions as important indicators of poor prognosis. Our work may contribute to a better understanding of TBI and provides new insights into how automated neuroimaging analysis can be used to improve prognostication after TBI.
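A rough sketch of the feature-combination step described above is given below: lesion volumes and statistics from an automatic segmentation are concatenated with classical clinical predictors and fed to a simple outcome classifier. The feature names and classifier choice are illustrative, not the paper's exact model.

```python
# Rough sketch: concatenating automatically extracted CT lesion features with
# classical clinical variables for outcome prediction. Feature names and the
# classifier choice are illustrative, not the paper's exact model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

IMAGING = ["contusion_volume_ml", "edema_volume_ml", "num_lesions"]   # hypothetical names
CLINICAL = ["age", "motor_score", "pupil_reactivity", "hypoxia", "hypotension"]

def build_features(lesion_stats, clinical):
    """Both arguments are per-patient dicts keyed by the names above."""
    return np.array([lesion_stats[k] for k in IMAGING] +
                    [clinical[k] for k in CLINICAL], dtype=float)

# X: (n_patients, n_features); y: 1 = unfavorable outcome, 0 = favorable.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# model.fit(X_train, y_train); risk = model.predict_proba(X_test)[:, 1]
```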